Learnable MFCCs for Speaker Verification
We propose a learnable mel-frequency cepstral coefficient (MFCC) frontend
architecture for deep neural network (DNN) based automatic speaker
verification. Our architecture retains the simplicity and interpretability of
MFCC-based features while allowing the model to be adapted to data flexibly. In
practice, we formulate data-driven versions of the four linear transforms of a
standard MFCC extractor -- windowing, discrete Fourier transform (DFT), mel
filterbank and discrete cosine transform (DCT). The reported results reach up to
6.7% (VoxCeleb1) and 9.7% (SITW) relative improvement in terms of equal error
rate (EER) over static MFCCs, without additional tuning effort.
Comment: Accepted to ISCAS 2021
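To make the architecture concrete, the following is a minimal sketch of such a frontend, assuming a PyTorch implementation in which the window, mel filterbank and DCT matrices are trainable parameters initialised from their standard static values (the DFT is kept fixed here for brevity). The class name LearnableMFCC, the hyperparameter defaults, the dct_matrix helper and the use of librosa to build the initial filterbank are illustrative assumptions, not the paper's actual code.

```python
# Sketch of a learnable MFCC-style frontend: the window, mel filterbank and
# DCT matrices are ordinary DNN parameters initialised from their static values.
import math
import torch
import torch.nn as nn
import librosa  # used only to build the initial (static) mel filterbank


def dct_matrix(n_mfcc: int, n_mels: int) -> torch.Tensor:
    """Orthonormal type-II DCT basis, as used in standard MFCC extraction."""
    n = torch.arange(n_mels, dtype=torch.float32)
    k = torch.arange(n_mfcc, dtype=torch.float32).unsqueeze(1)
    basis = torch.cos(math.pi / n_mels * (n + 0.5) * k) * math.sqrt(2.0 / n_mels)
    basis[0] /= math.sqrt(2.0)
    return basis


class LearnableMFCC(nn.Module):
    """MFCC pipeline whose linear transforms are trainable parameters."""

    def __init__(self, sr=16000, n_fft=512, hop=160, n_mels=40, n_mfcc=20):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        # Window: initialised to Hamming, then updated by backpropagation.
        self.window = nn.Parameter(torch.hamming_window(n_fft))
        # Mel filterbank: initialised to the usual triangular filters.
        mel = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
        self.mel_fb = nn.Parameter(torch.from_numpy(mel).float())
        # DCT: initialised to the type-II DCT basis.
        self.dct = nn.Parameter(dct_matrix(n_mfcc, n_mels))

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> frames: (batch, n_frames, n_fft)
        frames = wav.unfold(1, self.n_fft, self.hop) * self.window
        power = torch.fft.rfft(frames, dim=-1).abs() ** 2      # power spectrum
        mel = torch.clamp(power @ self.mel_fb.t(), min=1e-10)  # mel energies
        return torch.log(mel) @ self.dct.t()                   # (batch, n_frames, n_mfcc)
```

Initialised this way, the module reproduces standard static MFCCs at the start of training, after which each transform is free to adapt jointly with the speaker embedding network.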
A Comparative Re-Assessment of Feature Extractors for Deep Speaker Embeddings
Modern automatic speaker verification relies largely on deep neural networks
(DNNs) trained on mel-frequency cepstral coefficient (MFCC) features. While
there are alternative feature extraction methods based on phase, prosody and
long-term temporal operations, they have not been extensively studied with
DNN-based methods. We aim to fill this gap by providing an extensive re-assessment
of 14 feature extractors on VoxCeleb and SITW datasets. Our findings reveal
that features equipped with techniques such as spectral centroids, group delay
function, and integrated noise suppression provide promising alternatives to
MFCCs for deep speaker embedding extraction. Experimental results demonstrate
up to 16.3% (VoxCeleb) and 25.1% (SITW) relative decrease in equal error rate
(EER) compared to the baseline.
Comment: Accepted to Interspeech 2020
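To illustrate one of the alternatives named above, a bare-bones group delay computation for a single windowed frame might look as follows. This is a NumPy sketch of the textbook group delay function; the function name, FFT size and spectral floor are illustrative choices, and the paper's phase-based features would typically rely on a modified group delay variant with additional smoothing.

```python
# Textbook group delay spectrum for one windowed frame:
# tau(w) = (X_R*Y_R + X_I*Y_I) / |X(w)|^2, where Y is the DFT of n * x(n).
import numpy as np


def group_delay_spectrum(frame: np.ndarray, n_fft: int = 512) -> np.ndarray:
    n = np.arange(frame.shape[0])
    X = np.fft.rfft(frame, n_fft)           # DFT of x(n)
    Y = np.fft.rfft(n * frame, n_fft)       # DFT of n * x(n)
    # Small floor on |X|^2 avoids division by zero at spectral nulls.
    return (X.real * Y.real + X.imag * Y.imag) / np.maximum(np.abs(X) ** 2, 1e-10)
```

Modified group delay features commonly replace the |X(w)|^2 denominator with a cepstrally smoothed spectrum and apply compression exponents to tame spurious peaks near spectral zeros; the sketch above shows only the basic quantity being measured.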